8 research outputs found

    Surface-type classification in structured planar environments under various illumination and imaging conditions

    University of Technology Sydney, Faculty of Engineering and Information Technology. Recent advancements in sensing, computing and artificial intelligence have led to the application of robots outside the manufacturing factory and in field environments. For a field robot to operate intelligently and autonomously, it needs to build environmental awareness, for example by classifying the different surface-types on a steel bridge structure. However, it is challenging to classify surface-types from images captured in a structurally complex environment under various illumination and imaging conditions, because the colour and texture features extracted from these images can be inconsistent. This thesis presents an approach to classifying surface-types in a structurally complex three-dimensional (3D) environment under various illumination and imaging conditions. The approach uses RGB-D sensing to provide each pixel in an image with additional depth information, which is used by two developed algorithms. The first algorithm uses the RGB-D information along with a modified reflectance model to extract colour features for colour-based classification of surface-types. The second algorithm uses the depth information to calculate a probability map of each pixel being a specific surface-type; the probability map identifies the image regions that have a high probability of being accurately classified by a texture-based classifier. A 3D grid-based map is generated to combine the results produced by colour-based and texture-based classification. It is suggested that a robot manipulator be used to position an RGB-D sensor package in the complex environment to capture the RGB-D images. In this way, the 3D position of each pixel is precisely known in a common global frame (the robot base coordinate frame) and can be combined using a grid-based map to build up a rich awareness of the surrounding complex environment. A case study is conducted in a laboratory environment using a six degree-of-freedom robot manipulator with an RGB-D sensor package mounted to the end effector. The results show that the proposed approach provides an improved solution for vision-based classification of surface-types in a complex structural environment under various illumination and imaging conditions.
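
    The grid-based mapping step described above hinges on back-projecting each RGB-D pixel into the robot base coordinate frame via the manipulator's kinematics, then binning the classified points into a 3D grid. Below is a minimal sketch of that projection and a label-storing grid, assuming a pinhole camera model and a known camera pose T_base_cam from forward kinematics plus hand-eye calibration; all names here are illustrative, not from the thesis.

```python
import numpy as np

def pixel_to_base_frame(u, v, depth, K, T_base_cam):
    """Back-project pixel (u, v) with measured depth into the robot base frame.

    K is the 3x3 camera intrinsic matrix; T_base_cam is the 4x4 pose of the
    camera in the robot base frame, obtained from the manipulator's forward
    kinematics combined with a hand-eye calibration.
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # Point in the camera frame under a pinhole model (homogeneous coordinates).
    p_cam = np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth, 1.0])
    return (T_base_cam @ p_cam)[:3]

class GridMap:
    """Minimal 3D grid that stores one surface-type label per cell."""

    def __init__(self, resolution=0.01):
        self.resolution = resolution  # cell edge length in metres
        self.cells = {}               # (i, j, k) -> surface-type label

    def insert(self, point_base, label):
        key = tuple(np.floor(point_base / self.resolution).astype(int))
        self.cells[key] = label
```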

    Surface-type classification using RGB-D

    This paper proposes an approach to improve surface-type classification of images containing inconsistently illuminated surfaces. When a mobile inspection robot visually inspects surface-types in a dark environment and a directional light source is used to illuminate the surfaces, the captured images may exhibit illumination variance caused by the orientation and distance of the light source relative to the surfaces. To classify the surface-types in these images accurately, either the training image dataset needs to fully incorporate the illumination variance, or color features that provide high classification accuracy despite the variance need to be identified. In this paper, diffused reflectance values are extracted as new color features for classifying surface-types. In this approach, Red, Green, Blue-Depth (RGB-D) data is collected from the environment, and a reflectance model is used to calculate a diffused reflectance value for each pixel in each Red, Green, Blue (RGB) color channel. The diffused reflectance values can then be used to train a multiclass support vector machine classifier to classify surface-types. Experiments are conducted in a mock bridge maintenance environment using a portable RGB-D sensor package with an attached light source to collect surface-type data. The performance of a classifier trained with diffused reflectance values is compared against classifiers trained with other color features, including the RGB and L*a*b* color spaces. Results show that the classifier trained with diffused reflectance values achieves consistently higher classification accuracy than the classifiers trained with RGB and L*a*b* features. For test images containing a single surface plane, diffused reflectance values consistently provide greater than 90% classification accuracy; for test images containing a complex scene with multiple surface-types and surface planes, diffused reflectance values increase overall accuracy over RGB and L*a*b* by 49.24% and 13.66%, respectively. © 2013 IEEE.
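
    The paper's reflectance model is not reproduced in this abstract; the sketch below assumes a plain Lambertian point-light model with inverse-square falloff, I = P · k_d · cos θ / d², and inverts it per color channel so that the recovered reflectance k_d is insensitive to the light source's distance and orientation. The model choice and all names are assumptions for illustration.

```python
import numpy as np

def diffused_reflectance(intensity, distance, cos_theta, light_power=1.0):
    """Invert a Lambertian point-light model, I = P * k_d * cos(theta) / d**2,
    to recover the diffuse reflectance k_d for one RGB color channel.

    intensity : measured pixel value in the channel
    distance  : light-to-surface distance from the depth data (metres)
    cos_theta : cosine of the angle between surface normal and light direction
    """
    cos_theta = np.clip(cos_theta, 1e-6, 1.0)  # guard against grazing angles
    return intensity * distance ** 2 / (light_power * cos_theta)

# The per-pixel feature vector (k_r, k_g, k_b) can then train a multiclass SVM,
# e.g. sklearn.svm.SVC(kernel="rbf", decision_function_shape="ovr").
```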

    Image segmentation for surface material-type classification using 3D geometry information

    This paper describes a novel approach to segmenting complex images to determine candidates for accurate material-type classification. The proposed approach identifies classification candidates based on image quality calculated from viewing distance and angle information, which is extracted from 3D fused images constructed from laser range data and image data. This approach is applicable to material-type classification of images captured with varying degrees of image quality, attributable to the geometric uncertainty of the environment that is typical of autonomous robotic exploration. The proposed segmentation approach is demonstrated on an autonomous bridge maintenance system and validated using gray level co-occurrence matrix (GLCM) features combined with a naive Bayes classifier. Experimental results demonstrate the effects of viewing distance and angle on classification accuracy, and the benefits of segmenting images using 3D geometry information to identify candidates for accurate material-type classification. © 2010 IEEE.
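
    A plausible minimal realisation of the validation pipeline pairs scikit-image's GLCM utilities with a Gaussian naive Bayes classifier; the distances, angles and property set below are common defaults, not necessarily the paper's configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.naive_bayes import GaussianNB

def glcm_features(patch):
    """Texture features from the gray level co-occurrence matrix of an
    8-bit grayscale patch, pooled over four orientations."""
    glcm = graycomatrix(patch, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Train only on patches whose viewing distance and angle fall in the trusted
# range, then classify the image segments flagged by the 3D geometry check:
#   X = np.vstack([glcm_features(p) for p in training_patches])
#   clf = GaussianNB().fit(X, labels)
```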

    A sliding window approach to exploration for 3D map building using a biologically inspired bridge inspection robot

    This paper presents a sliding window approach to viewpoint selection when exploring an environment using an RGB-D sensor mounted to the end-effector of an inchworm climbing robot, developed for inspecting areas inside steel bridge archways that cannot be easily accessed by workers. The proposed exploration approach uses a kinematic-chain robot model and information theory-based next-best-view calculations to predict poses that are safe and reduce the information remaining in the environment. At each exploration step, a viewpoint is selected by analysing the Pareto efficiency of the predicted information gain and the required movement for a set of candidate poses. In contrast to previous approaches, a sliding window is used to determine candidate poses so as to avoid the costly operation of assessing the entire candidate set. Experimental results in simulation and on a prototype climbing robot platform show that the approach requires fewer gain calculations and less robot movement, and is therefore more efficient than other approaches when exploring a complex 3D steel bridge structure. © 2015 IEEE.
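
    Below is a rough sketch of the selection step, assuming each candidate pose already carries a predicted information gain and a movement cost; the Pareto-dominance test is standard, while the window size and the final tie-breaking rule are placeholders rather than the paper's criteria.

```python
def pareto_front(gains, costs):
    """Indices of candidates not dominated by any other candidate, i.e. no
    other pose has both higher predicted gain and lower movement cost."""
    front = []
    n = len(gains)
    for i in range(n):
        dominated = any(
            gains[j] >= gains[i] and costs[j] <= costs[i]
            and (gains[j] > gains[i] or costs[j] < costs[i])
            for j in range(n))
        if not dominated:
            front.append(i)
    return front

def next_viewpoint(candidates, window=10):
    """Assess only a sliding window of candidate poses instead of the full set.

    Each candidate is assumed to be a dict with 'gain' and 'cost' entries.
    """
    window_set = candidates[:window]
    gains = [c["gain"] for c in window_set]
    costs = [c["cost"] for c in window_set]
    best = max(pareto_front(gains, costs), key=lambda i: gains[i] - costs[i])
    return window_set[best]  # placeholder gain-vs-cost trade-off rule
```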

    A comprehensive approach to real-time fault diagnosis during automatic grit-blasting operation by autonomous industrial robots

    This paper presents a comprehensive approach to diagnosing faults that may occur during a robotic grit-blasting operation. The approach uses information collected from multiple sensors (an RGB-D camera, audio and pressure transducers) to detect 1) the real-time position of the grit-blasting spot and 2) the real-time state within the blasting line (i.e. compressed air only). The outcome of this approach enables a grit-blasting robot to autonomously diagnose faults and take corrective actions during the blasting operation. Experiments are conducted in a laboratory and in a grit-blasting chamber during real grit-blasting to demonstrate the proposed approach. Accuracy of 95% and above has been achieved in the experiments. © 2017 Elsevier Ltd.
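
    As a toy illustration of the multi-sensor idea (not the paper's method), here is a two-sensor rule that separates active blasting from an air-only line using microphone energy and line pressure; all thresholds are placeholders.

```python
import numpy as np

def blast_line_state(audio_frame, pressure_kpa,
                     rms_threshold=0.2, pressure_threshold=500.0):
    """Crude blasting-line state check from one audio frame and one pressure
    reading: 'blasting' when both line pressure and grit noise are high,
    'air-only' when the line is pressurised but grit noise is absent,
    'idle' otherwise. Threshold values are placeholders, not from the paper."""
    rms = np.sqrt(np.mean(np.square(audio_frame)))  # microphone energy
    if pressure_kpa > pressure_threshold:
        return "blasting" if rms > rms_threshold else "air-only"
    return "idle"
```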

    An approach for identifying classifiable regions of an image captured by autonomous robots in structural environments

    When an autonomous robot is deployed in a structural environment to visually inspect surfaces, the capture conditions of images (e.g. the camera's viewing distance and angle to surfaces) may vary due to non-ideal robot poses selected to position the camera in a collision-free manner. Given that surface inspection is conducted using a classifier trained with surface samples captured with limited variation in viewing distance and angle, inspection performance can be affected if the capture conditions change. This paper presents an approach to calculate a value that represents the likelihood of a pixel being classifiable by a classifier trained with a limited dataset. The likelihood value is calculated for each pixel in an image to form a likelihood map that can be used to identify classifiable regions of the image. The information necessary for calculating the likelihood values is obtained by collecting additional depth data that maps to each pixel in an image (collectively referred to as an RGB-D image). Experiments to test the approach are conducted in a laboratory environment using an RGB-D sensor package mounted onto the end-effector of a robot manipulator. A naive Bayes classifier trained with texture features extracted from gray level co-occurrence matrices is used to demonstrate the effect of image capture conditions on surface classification accuracy. Experimental results show that the classifiable regions identified using a likelihood map are up to 99.0% accurate, and the identified regions have up to 19.9% higher classification accuracy compared with the overall accuracy of the same image. © 2015 Elsevier Ltd.
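
    One plausible way to realise such a likelihood map from an RGB-D frame is to score every pixel by how close its viewing distance and angle lie to the classifier's training conditions; the Gaussian fall-off used below is an assumption, not the paper's formulation.

```python
import numpy as np

def likelihood_map(depth, normals, view_dirs,
                   d_train=0.5, a_train=0.0,
                   sigma_d=0.1, sigma_a=np.radians(15.0)):
    """Per-pixel likelihood that a classifier trained at viewing distance
    d_train (metres) and viewing angle a_train (radians) can classify the
    pixel; the fall-off widths sigma_d and sigma_a are tuning assumptions.

    depth     : HxW viewing distances from the RGB-D frame
    normals   : HxWx3 unit surface normals estimated from the depth image
    view_dirs : HxWx3 unit vectors from each surface point toward the camera
    """
    cos_a = np.clip(np.sum(normals * view_dirs, axis=2), -1.0, 1.0)
    angle = np.arccos(cos_a)
    like_d = np.exp(-0.5 * ((depth - d_train) / sigma_d) ** 2)
    like_a = np.exp(-0.5 * ((angle - a_train) / sigma_a) ** 2)
    return like_d * like_a  # threshold to obtain classifiable regions
```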

    Automated and frequent calibration of a robot manipulator-mounted IR range camera for steel bridge maintenance

    This paper presents an approach to performing frequent hand-eye calibration of an Infrared (IR) range camera mounted to the end-effector of a robot manipulator in a field environment. A set of three reflector discs arranged in a structured pattern is attached to the robot platform to provide high-contrast image features with corresponding range readings for accurate calculation of the camera-to-robot-base transform. Using this approach, the hand-eye transform between the IR range camera and the robot end-effector can be determined by incorporating the robot manipulator model. Experimental results show that a structured lighting-based IR range camera can be reliably hand-eye calibrated to a six-DOF robot manipulator using the presented automated approach. Once calibrated, the IR range camera can be positioned with the manipulator to generate a high-resolution geometric map of the surrounding environment suitable for performing the grit-blasting task. © Springer-Verlag Berlin Heidelberg 2014.
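
    With the three reflector discs located in the range image and their positions known in the robot base frame, the camera-to-robot-base transform can be recovered by least-squares rigid alignment (the Kabsch algorithm); the following is a generic sketch of that step, not the paper's exact procedure.

```python
import numpy as np

def rigid_transform(P_cam, P_base):
    """Least-squares rigid transform (Kabsch) mapping points measured in the
    camera frame onto their known positions in the robot base frame.

    P_cam, P_base : Nx3 arrays of corresponding points, N >= 3 and
    non-collinear (e.g. the three reflector discs).
    """
    c_cam, c_base = P_cam.mean(axis=0), P_base.mean(axis=0)
    H = (P_cam - c_cam).T @ (P_base - c_base)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # correct an improper rotation
        Vt[-1] *= -1.0
        R = Vt.T @ U.T
    t = c_base - R @ c_cam
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T  # T_base_cam; combine with forward kinematics for the hand-eye transform
```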